
    Queue-Based Random-Access Algorithms: Fluid Limits and Stability Issues

    We use fluid limits to explore the (in)stability properties of wireless networks with queue-based random-access algorithms. Queue-based random-access schemes are simple and inherently distributed in nature, yet provide the capability to match the optimal throughput performance of centralized scheduling mechanisms in a wide range of scenarios. Unfortunately, the types of activation rules for which throughput optimality has been established may result in excessive queue lengths and delays. The use of more aggressive/persistent access schemes can improve the delay performance, but does not offer any universal maximum-stability guarantees. In order to gain qualitative insight and investigate the (in)stability properties of more aggressive/persistent activation rules, we examine fluid limits where the dynamics are scaled in space and time. In some situations, the fluid limits have smooth deterministic features and maximum stability is maintained, while in other scenarios they exhibit random oscillatory characteristics, giving rise to major technical challenges. In the latter regime, more aggressive access schemes continue to provide maximum stability in some networks, but may cause instability in others. Simulation experiments are conducted to illustrate and validate the analytical results.
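    As a toy illustration of a queue-based activation rule, the sketch below simulates two interfering nodes that each attempt transmission with a probability growing in their own queue length. The model, the activation function f(q) = q/(q + beta), and all parameters are illustrative assumptions, not the specific rules analyzed in the paper.

```python
import random

# Toy discrete-time simulation of two interfering nodes under a
# queue-based random-access rule: node i attempts transmission with
# probability f(q_i) = q_i / (q_i + beta), so longer queues access the
# medium more aggressively. All names and parameters are illustrative.

def activation_prob(q, beta=1.0):
    return q / (q + beta) if q > 0 else 0.0

def simulate(arrival_rate=0.3, steps=100_000, beta=1.0):
    q = [0, 0]
    for _ in range(steps):
        # Bernoulli arrivals at each node.
        for i in range(2):
            if random.random() < arrival_rate:
                q[i] += 1
        # Each node attempts access; a collision wastes the slot.
        attempts = [random.random() < activation_prob(q[i], beta) for i in range(2)]
        if attempts.count(True) == 1:
            q[attempts.index(True)] -= 1
    return q

print("final queue lengths:", simulate())
```

    Sweeping beta or the arrival rate in this toy model gives a feel for the trade-off the abstract describes: more persistent access empties queues faster in light traffic but can destabilize the system elsewhere.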

    Large deviation asymptotics for occupancy problems

    In the standard formulation of the occupancy problem, one considers the distribution of r balls in n cells, with each ball assigned independently to a given cell with probability 1/n. Although closed-form expressions can be given for the distribution of various interesting quantities (such as the fraction of cells that contain a given number of balls), these expressions are often of limited practical use. Approximations provide an attractive alternative, and in the present paper we consider a large deviation approximation as r and n tend to infinity. In order to analyze the problem, we first consider a dynamical model, where the balls are placed in the cells sequentially and "time" corresponds to the number of balls that have already been thrown. A complete large deviation analysis of this "process level" problem is carried out, and the rate function for the original problem is then obtained via the contraction principle. The variational problem that characterizes this rate function is analyzed, and a fairly complete and explicit solution is obtained. The minimizing trajectories and minimal cost are identified up to two constants, and the constants are characterized as the unique solution to an elementary fixed point problem. These results are then used to solve a number of interesting problems, including an overflow problem and the partial coupon collector's problem. (Published by the Institute of Mathematical Statistics in the Annals of Probability, http://dx.doi.org/10.1214/00911790400000013.)
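    For intuition on the law-of-large-numbers profile around which these large deviations occur, the Monte Carlo sketch below compares the empirical fraction of cells containing exactly k balls with the Poisson(rho) limit e^{-rho} rho^k / k! when r = rho·n balls are thrown. This is a standard illustration of the setting, not the paper's analysis.

```python
import math
import random
from collections import Counter

# With r = rho * n balls thrown uniformly into n cells, the fraction of
# cells holding exactly k balls concentrates around the Poisson(rho) mass
# e^{-rho} rho^k / k! as n grows. Large deviations quantify how unlikely
# departures from this profile are.

def occupancy_fractions(n, rho, max_k=3):
    counts = Counter(random.randrange(n) for _ in range(int(rho * n)))
    occupancy = Counter(counts.values())   # k -> number of cells with k balls
    occupancy[0] = n - len(counts)         # cells that received no ball
    return [occupancy.get(k, 0) / n for k in range(max_k + 1)]

n, rho = 100_000, 1.5
empirical = occupancy_fractions(n, rho)
poisson = [math.exp(-rho) * rho**k / math.factorial(k) for k in range(4)]
for k, (e, p) in enumerate(zip(empirical, poisson)):
    print(f"k={k}: empirical {e:.4f}  vs  Poisson {p:.4f}")
```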

    Computation Alignment: Capacity Approximation without Noise Accumulation

    Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by Avestimehr et al. has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity, as it incurs a penalty for approximating the wireless channel with complex-valued coefficients by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well-suited to compute-and-forward. Unlike prior constant-gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh. (36 pages; to appear in IEEE Transactions on Information Theory.)
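    A minimal numerical sketch of the compute-and-forward ingredient, assuming the standard Nazer-Gastpar form of the computation rate for a complex-valued channel; the brute-force search over small Gaussian-integer coefficient vectors illustrates the integer-approximation penalty that the paper's alignment scheme is designed to avoid. The formula and the search range are assumptions for illustration.

```python
import numpy as np

# Computation rate at which one relay with channel h can decode the
# integer combination a of transmitted lattice codewords at power P
# (complex-valued form of the Nazer-Gastpar rate, assumed here).

def computation_rate(h, a, P):
    if not np.any(a):
        return 0.0  # the all-zero combination carries no information
    num = np.linalg.norm(a)**2 - P * abs(np.vdot(h, a))**2 / (1 + P * np.linalg.norm(h)**2)
    return max(0.0, np.log2(1.0 / num))

rng = np.random.default_rng(0)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # i.i.d. Rayleigh
P = 10.0

# Search small Gaussian-integer vectors for the one best aligned with h;
# the residual mismatch between h and the best integer vector is the
# penalty discussed in the abstract.
best = max(
    ((a_re + 1j * a_im, b_re + 1j * b_im)
     for a_re in range(-3, 4) for a_im in range(-3, 4)
     for b_re in range(-3, 4) for b_im in range(-3, 4)),
    key=lambda a: computation_rate(h, np.array(a), P),
)
print("best integer vector:", best, "rate:", computation_rate(h, np.array(best), P))
```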

    Capacity and Stable Scheduling in Heterogeneous Wireless Networks

    Heterogeneous wireless networks (HetNets) provide a means to increase network capacity by introducing small cells and adopting a layered architecture. HetNets allocate resources flexibly through time sharing and cell range expansion/contraction, allowing a wide range of possible schedulers. In this paper we define the capacity of a HetNet downlink in terms of the maximum number of downloads per second that can be achieved for a given offered traffic density. Given this definition, we show that the capacity is determined via the solution to a continuous linear program (LP). If the solution is smaller than 1, then there is a scheduler such that the number of mobiles in the network has ergodic properties with finite mean waiting time. If the solution is greater than 1, then no such scheduler exists. The above results continue to hold if a more general class of schedulers is considered. (30 pages, 6 figures.)
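    A finite discretization of such a continuous LP can be sketched directly. The toy below has three traffic locations and two base stations and minimizes the largest busy fraction rho, mirroring the stability dichotomy (rho < 1 versus rho > 1). The rates, traffic densities, and coarse discretization are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Traffic at L locations can be served by B base stations at
# location-dependent rates. Choose the split x[l, b] of each location's
# load to minimize the largest busy fraction rho.

rates = np.array([[10.0, 2.0],    # downloads/sec if served by macro / small cell
                  [4.0, 8.0],
                  [6.0, 6.0]])
traffic = np.array([3.0, 3.0, 2.0])   # offered downloads/sec per location
L, B = rates.shape

# Variables: x[l, b] (flattened) followed by rho. Minimize rho.
c = np.zeros(L * B + 1); c[-1] = 1.0

# Each location's traffic must be fully split: sum_b x[l, b] = traffic[l].
A_eq = np.zeros((L, L * B + 1))
for l in range(L):
    A_eq[l, l * B:(l + 1) * B] = 1.0

# Busy fraction of station b: sum_l x[l, b] / rates[l, b] <= rho.
A_ub = np.zeros((B, L * B + 1))
for b in range(B):
    for l in range(L):
        A_ub[b, l * B + b] = 1.0 / rates[l, b]
    A_ub[b, -1] = -1.0

res = linprog(c, A_ub=A_ub, b_ub=np.zeros(B), A_eq=A_eq, b_eq=traffic)
print("minimal load rho =", res.x[-1])   # < 1 => a stabilizing scheduler exists
```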

    Distributed Adaptive Algorithms for Optimal Opportunistic Medium Access

    We examine threshold-based transmission strategies for distributed opportunistic medium access in a scenario with fairly general probabilistic interference conditions. Specifically, collisions between concurrent transmissions are governed by arbitrary probabilities, allowing for a form of channel capture and covering binary interference constraints as an important special case. We address the problem of setting the threshold values so as to optimize the aggregate throughput utility of the various users, and particularly focus on a weighted logarithmic throughput utility function (Proportional Fairness). We provide an adaptive algorithm for finding the optimal threshold values in a distributed fashion, and rigorously establish the convergence of the proposed algorithm under mild statistical assumptions. Moreover, we discuss how the algorithm may be adapted to achieve packet-level stability with only limited exchange of queue length information among the various users. We also conduct extensive numerical experiments to corroborate the theoretical convergence results. (14 pages.)
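    As a rough illustration of the flavor of such a distributed adaptive algorithm (not the paper's scheme), the sketch below runs a stochastic-approximation update of access probabilities on a slotted collision channel toward the proportionally fair operating point; it assumes each user knows the number of contenders n, and all step sizes are illustrative.

```python
import random

# On a slotted collision channel, user i transmits with probability p_i
# and the proportionally fair point for T_i = p_i * prod_{j!=i}(1 - p_j)
# solves d/dp_i sum_j log T_j = 1/p_i - (n-1)/(1-p_i) = 0, i.e. p_i = 1/n.
# Each user ascends an unbiased noisy estimate of that gradient using only
# its own transmit indicator (plus knowledge of n, assumed here).

def simulate(n=5, slots=200_000):
    p = [0.5] * n
    for t in range(1, slots + 1):
        step = 1.0 / t ** 0.6                        # diminishing step size
        tx = [random.random() < p[i] for i in range(n)]
        for i in range(n):
            # E[tx_i / p_i^2] = 1/p_i and E[(1 - tx_i)/(1 - p_i)^2] = 1/(1 - p_i),
            # so g is an unbiased estimate of the gradient above.
            g = tx[i] / p[i] ** 2 - (n - 1) * (1 - tx[i]) / (1 - p[i]) ** 2
            p[i] = min(0.95, max(0.05, p[i] + step * g))
    return p

print(simulate())   # each p_i should settle near 1/n = 0.2
```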

    Scheduling of multi-antenna broadcast systems with heterogeneous users

    We study the problem of efficiently scheduling users in a Gaussian broadcast channel with M transmit antennas and K independent receivers, each with a single antenna. We first focus on a scenario with two transmit antennas and statistically identical users, and analyze the gap between the full sum capacity and the rate that can be achieved by transmitting to a suitably selected pair of users. In particular, we consider a scheme that picks the user with the largest channel gain, and selects a second user from the next L-1 strongest ones to form the best pair, taking channel orientations into account as well. We prove that the expected rate gap converges to 1/(L-1) nats/symbol as the total number of users K tends to infinity. Allowing L to increase with K, it may be deduced that transmitting to a properly chosen pair of users is asymptotically optimal, while considerably reducing the feedback overhead and scheduling complexity. Next, we tackle the problem of maximizing a weighted sum rate in a scenario with heterogeneous user characteristics. We establish a novel upper bound for the weighted sum capacity, which we then use to show that the maximum expected weighted sum rate can be asymptotically achieved by transmitting to a suitably selected subset of at most M·C users, where C denotes the number of distinct user classes. Numerical experiments indicate that the asymptotic results are remarkably accurate and that the proposed schemes operate close to absolute performance bounds, even for a moderate number of users.
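    The pairing rule for M = 2 can be sketched numerically. Assuming i.i.d. Rayleigh channels and using the zero-forcing sum rate as a stand-in for "taking channel orientations into account", the snippet below picks the strongest user and then the best partner among the next L-1 strongest; the channel model, power split, and ZF rates are illustrative assumptions.

```python
import numpy as np

# Compute the zero-forcing sum rate for a pair of single-antenna users
# served by a 2-antenna transmitter with equal power split P/2 per user.
def zf_sum_rate(h1, h2, P=10.0):
    H = np.vstack([h1, h2])                  # 2 x 2 composite channel
    W = np.linalg.pinv(H)                    # zero-forcing beamformers
    W /= np.linalg.norm(W, axis=0)           # unit-norm columns
    gains = np.abs(np.diag(H @ W)) ** 2      # effective per-user gains
    return sum(np.log(1 + (P / 2) * g) for g in gains)   # nats/symbol

rng = np.random.default_rng(1)
K, L = 100, 8
H = (rng.normal(size=(K, 2)) + 1j * rng.normal(size=(K, 2))) / np.sqrt(2)

order = np.argsort(-np.linalg.norm(H, axis=1))   # users by channel strength
best_user = order[0]                              # strongest user
partner = max(order[1:L], key=lambda j: zf_sum_rate(H[best_user], H[j]))
print("pair:", best_user, partner,
      "ZF sum rate:", zf_sum_rate(H[best_user], H[partner]))
```

    Restricting the partner search to the L-1 next-strongest users is what caps the feedback and scheduling cost, at the price of the 1/(L-1) rate gap quantified in the abstract.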

    Systematic review of prognostic models in traumatic brain injury

    BACKGROUND: Traumatic brain injury (TBI) is a leading cause of death and disability worldwide. The ability to accurately predict patient outcome after TBI has an important role in clinical practice and research. Prognostic models are statistical models that combine two or more items of patient data to predict clinical outcome, and they may improve predictions in TBI patients. Multiple prognostic models for TBI have accumulated over decades, but none of them is widely used in clinical practice. The objective of this systematic review is to critically assess existing prognostic models for TBI. METHODS: Studies that combine at least two variables to predict any outcome in patients with TBI were searched for in PUBMED and EMBASE. Two reviewers independently examined titles and abstracts and assessed whether each met the pre-defined inclusion criteria. RESULTS: A total of 53 reports including 102 models were identified. Almost half (47%) were derived from adult patients. Three quarters of the models included fewer than 500 patients. Most of the models (93%) were from high-income country populations. Logistic regression was the most common analytical strategy used to derive models (47%). Regarding the quality of the derivation models (n = 66), only 15% reported less than 10% loss to follow-up, 68% did not justify the rationale for including the predictors, 11% conducted an external validation, and only 19% of the logistic models presented the results in a clinically user-friendly way. CONCLUSION: Prognostic models are frequently published, but they are developed from small samples of patients, their methodological quality is poor, and they are rarely validated on external populations. Furthermore, they are not clinically practical, as they are not presented to physicians in a user-friendly way. Finally, because only a few are developed using populations from low- and middle-income countries, where most trauma occurs, the generalizability to these settings is limited.

    Quality assurance in pathology in colorectal cancer screening and diagnosis—European recommendations

    In Europe, colorectal cancer is the most common newly diagnosed cancer and the second most common cause of cancer deaths, accounting for approximately 436,000 incident cases and 212,000 deaths in 2008. The potential of high-quality screening to improve control of the disease has been recognized by the Council of the European Union, which issued a recommendation on cancer screening in 2003. Multidisciplinary, evidence-based European Guidelines for quality assurance in colorectal cancer screening and diagnosis have recently been developed by experts in a pan-European project coordinated by the International Agency for Research on Cancer. The full guideline document consists of ten chapters and an extensive evidence base. The content of the chapter dealing with pathology in colorectal cancer screening and diagnosis is presented here in order to promote international discussion and collaboration, leading to improvements in colorectal cancer screening and diagnosis by making the principles and standards recommended in the new EU Guidelines known to a wider scientific community.